perm filename ROADS[DOC,TOB] blob sn#154776 filedate 1975-04-11 generic text, type C, neo UTF8
A Look up the Road to AI

This note attempts to identify and describe links between current
artificial intelligence (AI) research and application areas that are
of direct interest to the Department of Defense.  We can approach
such an analysis from at least two directions:

 1) TOP-DOWN:  describe the science-technology-applications chains.
 This is the right approach for making predictions about "What is
 possible?" or "How long will it take?".

 2) BOTTOM-UP: start with descriptions of what is needed and work
 back to the technology base and scientific knowledge that are
 required, thus providing answers to questions like "What research
 and development activities are important to DoD?".

Since statements about "what is needed" are usually phrased in terms
of current technology, a strictly bottom-up approach will miss
important opportunities created by technological advances.  In the
current situation, we are examining the relevance of some existing
lines of research, which suggests that the top-down approach should be
dominant.  Fortunately, there are several AI texts that proceed
generally in that direction [e.g. Nilsson], though they don't go far
enough down for our current purpose.  We also have Ed Feigenbaum's
overview, which is a bit more specific [Feigenbaum]. 

Given that a substantial amount of top-down analysis already exists,
I choose a predominantly bottom-up approach in the balance of this
note.  Of course I cannot pose as an expert on all of DoD's needs,
but I did spend nine of my younger years designing (or attempting
to design) command-control and military intelligence systems.
I believe that I have insights on some important technical problems
and some defects in the ways we usually go about trying to solve them.

		WHAT IS THE PROBLEM?

When we think about the likely impact of future technology on
command-control and intelligence (CC&I) systems it is easy to be
convinced that computer and communications hardware will be the
dominant force.  While rapid advances in those areas will certainly
offer a number of new opportunities, they will not solve the major
problems in existing systems.  Software development and maintenance
tasks are dominant from both cost and system performance standpoints.
By "software" I mean both the buggy and obsolete programs and the
buggy and obsolete data files that they are supposed to work with. 

Much of the effort put into CC&I systems has been based on views
similar to the following:
 "In the era of ICBMs, we need more timely information for decision
 making.  Computers process information much faster than people.
 Therefore, new command-control and intelligence systems should be
 developed around computers."
Many hundreds of millions of dollars were invested in that idea,
especially by the Air Force.  What they got were systems that
generally produced less accurate information more slowly and
required more people to operate them than did the earlier manual
methods.

The problem was, and is, that while computer systems are rather
quick and reliable when fed complete and accurate data, they
tend to fall apart when confronted with erroneous or incomplete
data ("garbage in ...").   Extensive programmed checking of inputs
can improve accuracy, but doesn't do much for the speed of updates.
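The kind of programmed input checking mentioned above can be sketched
in a few lines.  The record fields and plausibility bounds here are
invented examples for illustration, not taken from any real system:

```python
def check_report(report):
    """Return a list of problems found in one incoming report."""
    problems = []
    # Completeness check: required fields must be present.
    for field in ("unit", "latitude", "longitude", "strength"):
        if field not in report:
            problems.append("missing field: " + field)
    # Plausibility checks: values must lie in possible ranges.
    if not -90 <= report.get("latitude", 0) <= 90:
        problems.append("latitude out of range")
    if not -180 <= report.get("longitude", 0) <= 180:
        problems.append("longitude out of range")
    if report.get("strength", 0) < 0:
        problems.append("negative strength")
    return problems

good = {"unit": "3rd Bn", "latitude": 48.1, "longitude": 11.6,
        "strength": 400}
bad = {"unit": "3rd Bn", "latitude": 148.1, "strength": -5}
```

Note that each check can only reject or flag a report; none of them
can repair it or reason about what the correct value probably was,
which is exactly the limitation discussed below.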

The fundamental problem is that while there are usually only a few
ways of doing a given task correctly, there are an infinite number
of ways of losing.  Given that an opportunity for failure exists,
Murphy's Law does the rest.

People are much more resilient.  They often recognize bad data and
either make decisions based on available information, given suitable
bounds on the uncertainty, or construct a plan to get the additional
information that is needed.  Computers cannot play a major role in
CC&I systems until they develop similar resilience.

Some of the capabilities needed are:
 a) "common sense" reasoning about what is possible in the "real
    world",
 b) ways of representing knowledge and uncertainty that facilitate
    internal consistency checking and deduction,
 c) generation of plans to accomplish a given goal from sets of
    elementary actions.
These are also central tasks of artificial intelligence research.
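Capability (c), plan generation from elementary actions, can be
illustrated by a toy breadth-first search over world states in the
STRIPS style.  The actions and facts below are invented examples,
not a real planning system:

```python
from collections import deque

# Each action: name, preconditions, facts added, facts deleted.
ACTIONS = [
    ("request photo", set(),               {"photo available"},  set()),
    ("analyze photo", {"photo available"}, {"target located"},   set()),
    ("issue order",   {"target located"},  {"mission assigned"}, set()),
]

def plan(start, goal):
    """Return a sequence of action names achieving every goal fact."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, pre, add, delete in ACTIONS:
            if pre <= state:
                new = frozenset((state - delete) | add)
                if new not in seen:
                    seen.add(new)
                    frontier.append((new, steps + [name]))
    return None  # no plan exists from the given actions
```

Asked for "mission assigned" starting from nothing, the search finds
the three-step chain of actions; asked for an unreachable goal, it
reports failure rather than guessing.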

In summary, bigger and faster computers offer the opportunity to
collect, process, and disseminate increased amounts of bad information.
When substantially better command-control and intelligence systems
are built, they will be based on AI techniques.

		BETTER SYSTEMS NOW

While we cannot solve some of the central problems of CC&I systems
yet, there are several tools available that could be usefully
employed. 

On reading the recent SRI survey of potential DoD applications for AI
[Stevens], I was struck by the similarity of the given list of
problems to lists that were compiled (by others) ten years ago.  Most
of the technology needed to at least partially automate the handling
of these tasks has existed for more than ten years, yet somehow it
hasn't happened.  One of the main reasons, I believe, is the shortage
of good interactive computer facilities in military installations. 

The SRI survey mentions interactive scene analysis for cartography as
a ripe topic.  This would employ AI image understanding techniques to
assist a person in extracting geographic data from photographs.  I
agree that this looks like a good bet, but hope that the development
effort can be kept to a modest scale at least for a while.  I recall
a similar task that was turned into a $40 million boondoggle a while
back. 

Another possible application area is in data retrieval systems.  The
user interfaces for such systems are usually so complex that they can
only be run by experts.  It should be possible, using natural
language understanding methods, to develop much more comfortable
"front ends" that will answer questions about the kinds of data files
that are available and what fields they contain.  The system would
also assist the user in formulating legal queries and pass them on to
the retrieval programs. 
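The behavior such a "front end" needs, stripped of the natural
language understanding itself, amounts to answering questions from a
catalog of files and checking queries against it.  The file names and
fields below are hypothetical:

```python
CATALOG = {
    "SHIPS":   ["name", "class", "home port"],
    "SORTIES": ["date", "unit", "target", "result"],
}

def describe(filename=None):
    """Answer 'what is available?' for the catalog or one file."""
    if filename is None:
        return sorted(CATALOG)
    return CATALOG[filename]

def check_query(filename, fields):
    """Return the requested fields the named file does not contain."""
    return [f for f in fields if f not in CATALOG.get(filename, [])]
```

A query asking SORTIES for a "pilot" field would be caught here, and
the front end could then tell the user which fields actually exist,
instead of passing a doomed request on to the retrieval programs.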

Certain automatic programming techniques also appear to be applicable
to the data retrieval problem.  For example, queries that require the
linking of data from two or more files might be answered without
requiring the user to know and specify how it is to be done. 
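The linking idea can be sketched as follows: given two files (here,
lists of records), discover the field name they share and join on it,
so the user never specifies the linkage.  The data and the assumption
of exactly one shared field are illustrative simplifications:

```python
def auto_join(file_a, file_b):
    """Join two record files on their common field, found automatically."""
    common = set(file_a[0]) & set(file_b[0])
    key = common.pop()  # simplifying assumption: one shared field
    index = {rec[key]: rec for rec in file_b}
    return [{**a, **index[a[key]]} for a in file_a if a[key] in index]

units   = [{"unit": "A", "base": "Travis"},
           {"unit": "B", "base": "Hickam"}]
reports = [{"unit": "A", "status": "ready"}]
```

A real system would have to choose among several candidate linking
fields and cope with inconsistent spellings of the key values, which
is where the deductive machinery discussed earlier comes in.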

		    MORE LATER

Over a longer period, we can expect much more from AI research and
related work in formal reasoning.  Automatic checking of incoming
information for both internal and external consistency will greatly
enhance the timeliness and accuracy of assimilated data in CC&I
systems. 

Program certification techniques (formal proof of correctness) will
supplant our imperfect debugging procedures. 

Natural language processing will permit textual reports to be treated
as data inputs, rather than just collections of words. 

As the King of Siam said, "etc., etc., etc..."

		TECHNOLOGY TRANSFER

An important question is "How can we encourage the exploitation of
opportunities created by AI research?" The traditional modes of
technology transfer have been through journal articles, conference
papers, the migration of graduate students to governmental and
industrial groups, and through consulting arrangements.  A number of
university groups maintain "forums" through which industrial groups
are briefed on recent developments. 

I observe that the Japanese government is generally more aggressive
than ours in encouraging the adoption of new technology.  For example,
in the last five years, more than a dozen busloads of Japanese
industrial and university groups have visited the Stanford AI Lab.,
largely under government sponsorship.  In the same period, there
have been only a few taxi-loads from U.S. industrial groups.  It is
my impression, in fact, that the Japanese have derived more in direct
benefits from our research than have American organizations.

It appears to me that a more aggressive technology transfer program
would be beneficial here.
			REFERENCES

Feigenbaum, Edward, "Artificial Intelligence Research", in file
AI.RPT [1,EAF] @SU-AI, 1973. 

Nilsson, Nils J., "Problem-solving Methods in Artificial
Intelligence", McGraw-Hill, New York, 1971. 

Stevens, "VOL2.I" @SRI-AI (text file).